The Locus Algorithm II: A robust software system to maximise the quality of fields of view for Differential Photometry
We present the software system developed to implement the Locus Algorithm, a
novel algorithm designed to maximise the performance of differential photometry
systems by optimising the number and quality of reference stars in the Field of
View with the target. Firstly, we state the design requirements, constraints
and ambitions for the software system required to implement this algorithm.
Then, a detailed software design is presented for the system in operation.
Next, the data design including file structures used and the data environment
required for the system are defined. Finally, we conclude by illustrating the
scaling requirements which mandate a high-performance computing implementation
of this system, which is discussed in the other papers in this series.
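The core idea described above can be sketched as a search over candidate pointings, scored by the number and quality of reference stars that share the field of view with the target. This is a minimal illustrative sketch, not the published implementation: the square field of view, the star attributes, and the magnitude-based quality rating are simplified assumptions for illustration.

```python
# Illustrative sketch of the pointing-optimisation idea: score each candidate
# pointing by the combined quality of reference stars sharing the FoV with the
# target. Star data and the rating function are simplified placeholders.
from dataclasses import dataclass

@dataclass
class Star:
    ra: float    # right ascension, degrees
    dec: float   # declination, degrees
    mag: float   # magnitude

def in_fov(star, pointing, half_size):
    """True if the star lies in a square field of view centred on `pointing`."""
    ra0, dec0 = pointing
    return abs(star.ra - ra0) <= half_size and abs(star.dec - dec0) <= half_size

def rating(target, ref):
    """Toy quality score: references closer in magnitude to the target rate higher."""
    return max(0.0, 1.0 - abs(target.mag - ref.mag))

def score_pointing(target, stars, pointing, half_size=0.1):
    """Sum the ratings of all reference stars sharing the FoV with the target."""
    if not in_fov(target, pointing, half_size):
        return 0.0
    return sum(rating(target, s) for s in stars
               if s is not target and in_fov(s, pointing, half_size))

def best_pointing(target, stars, candidates, half_size=0.1):
    """Pick the candidate pointing that maximises the combined reference score."""
    return max(candidates, key=lambda p: score_pointing(target, stars, p, half_size))
```

The real system derives candidate pointings from the star catalogue itself and uses colour as well as magnitude in its rating; the sketch only shows the overall shape of the optimisation.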
The Locus Algorithm IV: Performance metrics of a grid computing system used to create catalogues of optimised pointings
This paper discusses the requirements for and performance metrics of the
Grid Computing system used to implement the Locus Algorithm to identify optimum
pointings for differential photometry of 61,662,376 stars and 23,779 quasars.
Initial operational tests indicated a need for a software system to analyse the
data and a High Performance Computing system to run that software in a scalable
manner. Practical assessments of the performance of the software in a serial
computing environment were used to provide a benchmark against which the
performance metrics of the HPC solution could be compared, as well as to
indicate any bottlenecks in performance. These metrics revealed a distinct
split in performance, dictated more by differences in the input data than by
differences in the design of the systems used. This indicates a
need for experimental analysis of system performance, and suggests that
algorithmic complexity analyses may lead to incorrect or naive conclusions,
especially in systems with high data I/O overhead such as grid computing.
Further, it implies that systems which reduce or eliminate this bottleneck,
such as in-memory processing, could lead to a substantial increase in
performance.
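The comparison described above, between a serial benchmark and an HPC run, is conventionally expressed as speed-up and parallel efficiency. The sketch below uses hypothetical figures, not the paper's measurements; a data-dependent split like the one described would appear as very different per-record throughput across input datasets.

```python
# Minimal sketch of serial-vs-parallel performance metrics. The timing
# figures below are hypothetical, for illustration only.
def speedup(serial_time, parallel_time):
    """Speed-up factor of the parallel run over the serial benchmark."""
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, workers):
    """Fraction of ideal linear scaling actually achieved."""
    return speedup(serial_time, parallel_time) / workers

# Hypothetical example: 400 h serially, 5 h on 100 grid nodes.
s = speedup(400.0, 5.0)          # 80x faster than the serial benchmark
e = efficiency(400.0, 5.0, 100)  # 0.8: overhead (e.g. I/O) costs 20% of ideal
```

An efficiency well below 1.0 on I/O-heavy workloads is exactly the kind of bottleneck the abstract attributes to data movement rather than algorithmic complexity.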
GPU simulation with Opticks: The future of optical simulations for LZ
The LZ collaboration aims to directly detect dark matter by using a liquid xenon Time Projection Chamber (TPC). In order to probe the dark matter signal, observed signals are compared with simulations that model the detector response. The most computationally expensive aspect of these simulations is the propagation of photons in the detector’s sensitive volume. For this reason, we propose to offload photon propagation modelling to the Graphics Processing Unit (GPU) by integrating Opticks into the LZ simulation workflow. Opticks is a system which maps Geant4 geometry and photon generation steps to NVIDIA’s OptiX GPU ray-tracing framework. This paradigm shift could simultaneously achieve a massive speed-up and an increase in accuracy for LZ simulations. By using the technique of containerization through Shifter, we will produce a portable system to harness the NERSC supercomputing facilities, including the forthcoming Perlmutter supercomputer, and enable the GPU processing to handle different detector configurations. Prior experience using Opticks to simulate JUNO indicates the potential for speed-up factors over 1000× for LZ, and by extension other experiments requiring photon propagation simulations.
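Photon propagation suits GPU offload because each optical photon is propagated independently: the workload is embarrassingly parallel. The toy model below is an assumption for illustration only (uniform absorption length, fixed step size), not the Geant4/Opticks physics.

```python
# Illustrative sketch of why photon propagation parallelises well: each photon's
# random walk is independent of every other photon's. A GPU maps this per-photon
# loop onto thousands of threads (in Opticks, via NVIDIA OptiX ray tracing).
# The uniform-absorption toy physics here is an assumption for illustration.
import random

def propagate(seed, absorption_length=10.0, step=0.1):
    """Step one photon until it is absorbed; return the distance travelled."""
    rng = random.Random(seed)
    distance = 0.0
    # Approximate exponential attenuation with a per-step absorption test.
    p_absorb = step / absorption_length
    while rng.random() > p_absorb:
        distance += step
    return distance

# Independent photons: trivially parallel over seeds.
distances = [propagate(seed) for seed in range(1000)]
mean_path = sum(distances) / len(distances)  # near the absorption length
```

Because no photon depends on any other, the serial list comprehension can be replaced one-for-one by a parallel map over photons, which is the structural reason for the large speed-up factors quoted in the abstract.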